Anthropic and OpenAI Tighten Security Amid Advanced AI Hacking Risks
Artificial intelligence firms Anthropic and OpenAI are escalating defensive measures as their models demonstrate alarming capabilities in bypassing cybersecurity systems. Anthropic now mandates government ID verification for sensitive functions, partnering with identity-verification provider Persona while pledging not to use the collected data for AI training. The clampdown follows internal testing that revealed sophisticated hacking aptitude in Claude Mythos Preview.
OpenAI has taken a parallel approach, releasing specialized models exclusively to security experts so they can fortify vulnerable infrastructure. What began as tools for whimsical applications, like generating Ghibli-style art, has rapidly evolved into a national security priority. The industry's pivot reflects growing institutional recognition of AI's dual-use potential in both defensive and offensive cyber operations.